Virtual reality and augmented reality (XR) bring an increasing demand for 3D content. However, creating high-quality 3D content requires tedious work that a human expert must do. In this work, we study the challenging task of lifting a single image to a 3D object and, for the first time, demonstrate the ability to generate a plausible 3D object with 360° views that correspond well with the given reference image. By conditioning on the reference image, our model can fulfill the everlasting curiosity for synthesizing novel views of objects from images. Our technique sheds light on a promising direction for easing the workflows of 3D artists and XR designers. We propose a novel framework, dubbed NeuralLift-360, that utilizes a depth-aware neural radiance representation (NeRF) and learns to craft the scene guided by denoising diffusion models. By introducing a ranking loss, our NeuralLift-360 can be guided with rough depth estimates obtained in the wild. We also adopt a CLIP-guided sampling strategy for the diffusion prior to provide coherent guidance. Extensive experiments demonstrate that our NeuralLift-360 significantly outperforms existing state-of-the-art baselines. Project page: https://vita-group.github.io/NeuralLift-360/
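The abstract does not spell out the ranking loss; a minimal sketch (the function names, margin, and pair-sampling scheme below are illustrative assumptions, not the paper's implementation) of how a rough monocular depth estimate could supervise NeRF-rendered depth through pairwise ordinal constraints might look like:

```python
import torch

def depth_ranking_loss(rendered_depth, est_depth, num_pairs=1024, margin=1e-4):
    """Hypothetical pairwise ranking loss: only the *ordering* of the rough
    monocular depth estimate is trusted, not its absolute scale."""
    n = rendered_depth.numel()
    idx_a = torch.randint(0, n, (num_pairs,), device=rendered_depth.device)
    idx_b = torch.randint(0, n, (num_pairs,), device=rendered_depth.device)
    # Sign of the ordering according to the (noisy) depth estimate.
    sign = torch.sign(est_depth.flatten()[idx_a] - est_depth.flatten()[idx_b])
    diff = rendered_depth.flatten()[idx_a] - rendered_depth.flatten()[idx_b]
    # Penalize rendered depth whenever it contradicts the estimated ordering.
    return torch.clamp(margin - sign * diff, min=0.0).mean()

# Usage sketch (assumed names): rendered = nerf_render_depth(rays); est = monocular_depth(image)
```

Because only orderings are penalized, such a loss tolerates the scale and shift ambiguity typical of in-the-wild depth estimators.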
Implicit Neural Representations (INRs), which encode continuous multimedia data via multi-layer perceptrons, have shown undeniable promise in various computer vision tasks. Despite many successful applications, editing and processing an INR remains intractable, as signals are represented by the latent parameters of a neural network. Existing works manipulate such continuous representations by processing their discretized instances, which breaks down the compactness and continuous nature of the INR. In this work, we present a pilot study on the question: how can we directly modify an INR without explicit decoding? We answer this question by proposing an implicit neural signal processing network, dubbed INSP-Net, built on differential operators over INRs. Our key insight is that spatial gradients of neural networks can be computed analytically and are invariant to translation, and we show mathematically that any continuous convolution filter can be uniformly approximated by a linear combination of high-order differential operators. With these two knobs, INSP-Net instantiates a signal processing operator as a weighted composition of computational graphs corresponding to the high-order derivatives of INRs, where the weighting parameters can be learned in a data-driven manner. Based on our proposed INSP-Net, we further build the first Convolutional Neural Network (CNN) that runs implicitly on INRs, named INSP-ConvNet. Our experiments validate the expressiveness of INSP-Net and INSP-ConvNet in fitting low-level image and geometry processing kernels (e.g., blurring, deblurring, denoising, inpainting, and smoothing) as well as in high-level tasks on implicit fields such as image classification.
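As a toy illustration of the two ingredients named above, analytically computed spatial gradients and a weighted combination of high-order differential operators, the following sketch (not the authors' INSP-Net implementation; the operator weights would be learned in practice, and the tiny MLP stands in for a pretrained INR) builds a filter response from an INR's value, gradient, and Laplacian via automatic differentiation:

```python
import torch

class INR(torch.nn.Module):
    """A tiny coordinate MLP standing in for a pretrained INR."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 1))
    def forward(self, xy):
        return self.net(xy)

def high_order_response(inr, xy, weights):
    """Weighted combination of the INR value, its gradient norm, and its
    Laplacian -- a stand-in for a learned differential-operator mixture."""
    xy = xy.requires_grad_(True)
    f = inr(xy)
    grad = torch.autograd.grad(f.sum(), xy, create_graph=True)[0]
    lap = 0.0
    for d in range(xy.shape[-1]):
        lap = lap + torch.autograd.grad(grad[:, d].sum(), xy, create_graph=True)[0][:, d]
    return weights[0] * f.squeeze(-1) + weights[1] * grad.norm(dim=-1) + weights[2] * lap

inr = INR()
coords = torch.rand(128, 2)
out = high_order_response(inr, coords, weights=torch.tensor([1.0, 0.1, 0.05]))
```

The key point is that no discretized image is ever materialized: every derivative is taken with respect to the continuous input coordinates of the INR itself.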
Neural volumetric representations have shown that an MLP network can be trained with multi-view calibrated images to represent the geometry and appearance of a scene without explicit 3D supervision. Object segmentation can enrich many downstream applications based on the learned radiance field. However, introducing hand-crafted segmentation to define regions of interest in complex real-world scenes is non-trivial and expensive, as it requires per-view annotations. This paper explores self-supervised learning of object segmentation with NeRF for complex real-world scenes. Our framework, NeRF-SOS, couples object segmentation and neural radiance fields to segment objects from any view within a scene. By proposing a novel collaborative contrastive loss at both the appearance and geometry levels, NeRF-SOS encourages the NeRF model to distill compact geometry-aware segmentation clusters from its density field together with self-supervised, pre-trained 2D visual features. The self-supervised object segmentation framework can be applied to various NeRF models, yielding both photo-realistic rendering results and convincing segmentation for indoor and outdoor scenes. Extensive results on the LLFF and Tanks & Temples datasets validate the effectiveness of NeRF-SOS. It consistently surpasses other image-based self-supervised baselines and even captures finer details than supervised Semantic-NeRF.
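A heavily simplified sketch of what a collaborative contrastive objective over appearance and geometry cues could look like is given below; the feature sources, the affinity formulation, and the threshold are illustrative assumptions only, not the paper's actual loss:

```python
import torch
import torch.nn.functional as F

def affinity(feats):
    """Cosine-similarity affinity matrix between per-patch features."""
    f = F.normalize(feats, dim=-1)
    return f @ f.t()

def collaborative_contrastive_loss(seg_feats, app_feats, geo_feats, tau=0.1):
    """Illustrative sketch: pull the segmentation-feature affinity toward both
    the appearance (e.g., frozen self-supervised 2D features) affinity and the
    geometry (density-derived) affinity computed over the same rendered patches."""
    a_seg = affinity(seg_feats)
    target = 0.5 * (affinity(app_feats) + affinity(geo_feats))
    # Patch pairs that agree under appearance + geometry act as positives, others as negatives.
    pos = (target > tau).float()
    return F.binary_cross_entropy_with_logits(a_seg / tau, pos)

# Assumed inputs: seg_feats are per-patch segmentation embeddings rendered from the field,
# app_feats come from a frozen 2D backbone, geo_feats are density statistics along the rays.
```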
Representing visual signals with implicit representations (e.g., coordinate-based deep networks) has prevailed in many vision tasks. This work explores a new intriguing direction: training a stylized implicit representation using a generalized approach that is applicable to various 2D and 3D scenes. We conduct a pilot study on a variety of implicit functions, including 2D coordinate-based representations, neural radiance fields, and signed distance functions. Our solution is a unified Implicit Neural Stylization framework, dubbed INS. In contrast to vanilla implicit representations, INS decomposes the ordinary implicit function into a style implicit module and a content implicit module, in order to separately encode representations from the style image and the input scene. An amalgamation module is then applied to aggregate this information and synthesize the stylized output. To regularize the geometry in 3D scenes, we propose a novel self-distillation geometry consistency loss that preserves the geometric fidelity of the stylized scenes. Comprehensive experiments are conducted on multiple task settings, including novel view synthesis of complex scenes, stylization of implicit surfaces, and fitting images with MLPs. We further demonstrate that the learned representation is continuous not only spatially but also style-wise, enabling effortless interpolation between different styles and the generation of images with new mixed styles. Please see the videos on our project page for more view synthesis results: https://zhiwenfan.github.io/ins.
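A minimal sketch of the style/content decomposition described above, with an amalgamation MLP fusing the two branches, could look as follows; the module sizes and the style-code input are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class ImplicitModule(nn.Module):
    """A small MLP used for both the content and style branches."""
    def __init__(self, in_dim, out_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))
    def forward(self, x):
        return self.net(x)

class INSSketch(nn.Module):
    """Illustrative decomposition: a content implicit module for the scene,
    a style implicit module driven by a style code, and an amalgamation
    module that fuses the two into the stylized output (e.g., RGB)."""
    def __init__(self, coord_dim=2, style_dim=64, feat_dim=128):
        super().__init__()
        self.content = ImplicitModule(coord_dim, feat_dim)
        self.style = ImplicitModule(style_dim, feat_dim)
        self.amalgamation = ImplicitModule(2 * feat_dim, 3)
    def forward(self, coords, style_code):
        c = self.content(coords)                                   # per-point content features
        s = self.style(style_code).expand(coords.shape[0], -1)     # shared style features
        return self.amalgamation(torch.cat([c, s], dim=-1))

model = INSSketch()
rgb = model(torch.rand(1024, 2), torch.randn(1, 64))
```

Keeping content and style in separate branches is also what makes style interpolation straightforward: mixing two style codes (or two style-branch outputs) yields a new blended style without retraining the content branch.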
Despite the rapid development of Neural Radiance Fields (NeRF), the necessity of dense view coverage largely prohibits their wider application. Although several recent works have attempted to address this issue, they either operate with sparse views (yet still a few of them) or on simple objects/scenes. In this work, we consider a more ambitious task: training a neural radiance field by "looking only once", i.e., using only a single view, on real-world complex visual scenes. To attain this goal, we present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations. Specifically, SinNeRF constructs a semi-supervised learning process, in which we introduce and propagate geometry pseudo-labels and semantic pseudo-labels to guide the progressive training process. Extensive experiments are conducted on complex scene benchmarks, including the NeRF synthetic dataset, the Local Light Field Fusion dataset, and the DTU dataset. We show that even without pre-training on multi-view datasets, SinNeRF can yield photo-realistic novel-view synthesis results. Under the single-image setting, SinNeRF significantly outperforms the current state-of-the-art NeRF baselines in all cases. Project page: https://vita-group.github.io/sinnerf/
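A rough sketch of how the three supervision signals described above might be combined into one training objective is shown below; the loss weights and the sources of the geometry and semantic pseudo-labels are assumptions for illustration, not the paper's exact formulation:

```python
import torch

def single_view_objective(ref_rgb_pred, ref_rgb_gt,
                          unseen_depth_pred, depth_pseudo,
                          unseen_feat_pred, feat_pseudo,
                          w_geo=0.1, w_sem=0.01):
    """Illustrative combination of: reconstruction on the single reference view,
    geometry pseudo-labels (e.g., depths propagated from the reference view) on
    unseen views, and semantic pseudo-labels (e.g., features from a frozen 2D
    backbone) on unseen views."""
    recon = torch.nn.functional.mse_loss(ref_rgb_pred, ref_rgb_gt)
    geo = torch.nn.functional.l1_loss(unseen_depth_pred, depth_pseudo)
    sem = 1.0 - torch.nn.functional.cosine_similarity(
        unseen_feat_pred.flatten(1), feat_pseudo.flatten(1), dim=-1).mean()
    return recon + w_geo * geo + w_sem * sem
```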
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are two-fold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After these steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and models will be available.
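A simplified sketch of the feature-level idea, masked average pooling over support masks to obtain dynamic class centers and then re-weighting query features by their similarity to those centers, might look like the following (the re-weighting rule itself is an illustrative assumption):

```python
import torch

def masked_class_centers(support_feats, support_masks):
    """Masked average pooling: one dynamic center per support example.
    support_feats: (K, C, H, W), support_masks: (K, H, W) binary masks."""
    masks = support_masks.unsqueeze(1).float()                               # (K, 1, H, W)
    centers = (support_feats * masks).sum(dim=(2, 3)) / masks.sum(dim=(2, 3)).clamp(min=1.0)
    return centers                                                           # (K, C)

def reweight_query(query_feats, centers):
    """Re-weight query features by similarity to the class centers -- an
    illustrative stand-in for the feature-level enhancement."""
    q = torch.nn.functional.normalize(query_feats, dim=1)                    # (B, C, H, W)
    c = torch.nn.functional.normalize(centers, dim=1)                        # (K, C)
    sim = torch.einsum('bchw,kc->bkhw', q, c)                                # similarity maps
    attn = sim.max(dim=1, keepdim=True).values.sigmoid()                     # (B, 1, H, W)
    return query_feats * (1.0 + attn)

centers = masked_class_centers(torch.rand(3, 256, 32, 32), torch.randint(0, 2, (3, 32, 32)))
enhanced = reweight_query(torch.rand(2, 256, 32, 32), centers)
```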
In this chapter, we review and discuss the transformation that AI technology brings to HCI/UX work and assess how it will change the way we do this work. We first discuss how AI can be used to enhance the results of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, automatic ME recognition (MER) is becoming increasingly crucial in the field of affective computing, and it provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains tiny. To address ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class-imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
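As a generic illustration of one common remedy for the class-imbalance issue mentioned above (not necessarily the specific solution studied in the paper), inverse-frequency sampling with PyTorch could be sketched as:

```python
import torch
from torch.utils.data import WeightedRandomSampler

def class_balanced_sampler(labels):
    """Sample training clips inversely to class frequency so that rare
    micro-expression categories appear as often as frequent ones."""
    labels = torch.as_tensor(labels)
    counts = torch.bincount(labels).float()
    weights = 1.0 / counts[labels]
    return WeightedRandomSampler(weights.tolist(), num_samples=len(labels), replacement=True)

# Hypothetical usage: sampler = class_balanced_sampler(dataset_labels)
#                     loader = DataLoader(dfme_dataset, batch_size=32, sampler=sampler)
```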
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while lacking consideration of long-distance scenes (e.g., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by low image quality from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover discriminative image information by incorporating a super-resolution network; (2) generated sample pairs are used to simulate quality-variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation; and (3) a Separate Quality Network (SQN) is designed to learn discriminative features that are independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
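A minimal sketch of the contrastive quality-invariance idea, treating an image and a degraded version of itself as a positive pair, is given below; the degradation operator and the InfoNCE form are assumptions for illustration, not the paper's modules:

```python
import torch
import torch.nn.functional as F

def degrade(images, scale=4):
    """Simulate quality variance (a simple assumption) by down/up-sampling."""
    h, w = images.shape[-2:]
    low = F.interpolate(images, scale_factor=1.0 / scale, mode='bilinear', align_corners=False)
    return F.interpolate(low, size=(h, w), mode='bilinear', align_corners=False)

def quality_invariance_loss(encoder, images, tau=0.07):
    """Illustrative InfoNCE-style loss: embeddings of an image and its degraded
    counterpart form a positive pair; other images in the batch are negatives."""
    z1 = F.normalize(encoder(images), dim=-1)
    z2 = F.normalize(encoder(degrade(images)), dim=-1)
    logits = z1 @ z2.t() / tau
    targets = torch.arange(images.shape[0], device=images.device)
    return F.cross_entropy(logits, targets)
```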
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a wide range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise, and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models spanning different input representations (e.g., point clouds, voxels, projected images), network architectures, and training schemes. Through this study, we obtain two insights: 1) the input representation plays a crucial role in robustness, and different representations behave very differently under specific corruptions; 2) although state-of-the-art methods for LiDAR semantic segmentation achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on these observations, we design a robust LiDAR segmentation model (RLSeg) that greatly boosts robustness with simple but effective modifications. We expect that our benchmark, comprehensive analysis, and observations will boost future research in robust LiDAR semantic segmentation for safety-critical applications.
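As an illustration of the kind of out-of-domain perturbations such a benchmark applies (the corruption types and parameters here are arbitrary assumptions, not the benchmark's definitions), two simple point-cloud corruptions could be sketched as:

```python
import numpy as np

def corrupt_point_cloud(points, jitter_std=0.02, drop_ratio=0.1, seed=0):
    """Two illustrative corruptions: per-point Gaussian jitter (measurement
    noise) and random point dropout (e.g., mimicking sparser returns)."""
    rng = np.random.default_rng(seed)
    noisy = points + rng.normal(0.0, jitter_std, size=points.shape)
    keep = rng.random(len(noisy)) > drop_ratio
    return noisy[keep]

clean = (np.random.rand(2048, 3) * 50.0).astype(np.float32)
corrupted = corrupt_point_cloud(clean)
```

Evaluating a trained segmentation model on such perturbed copies of the validation split, rather than on clean data alone, is what exposes the robustness gaps the abstract describes.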